Results 1 - 4 of 4
1.
3rd International Symposium on Instrumentation, Control, Artificial Intelligence, and Robotics, ICA-SYMP 2023 ; : 127-130, 2023.
Article in English | Scopus | ID: covidwho-2275520

ABSTRACT

One of the difficult challenges in AI development is making machines understand human feelings from expression, because humans can express feelings in various ways, for example through voice, facial actions, or behavior. Facial Emotion Recognition (FER) has been used in interrogating suspects, as a tool to help detect emotions in people with nerve damage, and even during the COVID-19 pandemic when patients hid their timelines. It can also be applied to detect lies through micro-expressions. This work focuses mainly on FER: the results of a Deep Neural Network (DNN), a Convolutional Neural Network (CNN), and a Vision Transformer were compared. Human emotion expressions were classified using facial expression datasets from AffectNet, Tsinghua, Extended Cohn-Kanade (CK+), Karolinska Directed Emotional Faces (KDEF), and Real-world Affective Faces (RAF). Finally, all models were evaluated on the testing dataset to confirm their performance. The results show that the Vision Transformer outperforms the other models. © 2023 IEEE.
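As a rough illustration of the Vision Transformer approach this abstract reports, below is a minimal fine-tuning sketch in PyTorch. The `timm` model name, the `data/fer/train` folder layout, the 7-class label set, and all hyperparameters are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch: fine-tuning a Vision Transformer for 7-class FER.
# Assumes images are arranged in class folders (ImageFolder layout) and
# that the timm library is available; names and settings are illustrative.
import torch
import torch.nn as nn
import timm
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained ViT with a fresh 7-way head (e.g., the 7 basic emotions).
model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          num_classes=7).to(device)

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])
train_ds = datasets.ImageFolder("data/fer/train", transform=tfm)  # hypothetical path
loader = DataLoader(train_ds, batch_size=32, shuffle=True)

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # one epoch shown for brevity
    images, labels = images.to(device), labels.to(device)
    opt.zero_grad()
    loss_fn(model(images), labels).backward()
    opt.step()
```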

2.
Sensors (Basel) ; 22(20)2022 Oct 21.
Article in English | MEDLINE | ID: covidwho-2082200

ABSTRACT

Human ideas and sentiments are mirrored in facial expressions. They give the viewer a plethora of social cues, such as the focus of attention, intention, motivation, and mood, which can help develop better interactive solutions on online platforms. This could be helpful when teaching children, cultivating a better interactive connection between teachers and students, given the increasing shift toward online education platforms due to the COVID-19 pandemic. To address this, the authors propose kids' emotion recognition based on visual cues, with a justified reasoning model built on explainable AI. Two datasets were used: the LIRIS Children Spontaneous Facial Expression Video Database, and a novel dataset created by the authors of emotions displayed by children aged 7 to 10. Prior work on the LIRIS dataset achieved only 75% accuracy and no study has taken it further; the authors achieved a highest accuracy of 89.31% on LIRIS and 90.98% on their own dataset. The authors also observed that the facial structure of children differs from that of adults, and that children express emotions very differently, not always following the same facial expression patterns for a specific emotion as adults do. Hence, the authors used 468 3D landmark points and created two mesh versions of the selected datasets, LIRIS-Mesh and Authors-Mesh. In total, four dataset types were used, namely LIRIS, the authors' dataset, LIRIS-Mesh, and Authors-Mesh, and a comparative analysis was performed using seven different CNN models. The authors not only compared all dataset types on the different CNN models, but also explained, for every CNN on every dataset type, how test images are perceived by the deep-learning models using explainable artificial intelligence (XAI), which helps localize the features contributing to particular emotions. Three XAI methods were used, namely Grad-CAM, Grad-CAM++, and SoftGrad, which help users establish the appropriate reason for an emotion detection by revealing the contribution of individual features.
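Grad-CAM, the first of the three XAI methods named above, can be sketched in plain PyTorch using forward/backward hooks, as below. The ResNet-18 backbone and layer choice are illustrative stand-ins, not the paper's seven CNN models, and Grad-CAM++ / SoftGrad are not reproduced here.

```python
# Minimal sketch: Grad-CAM heatmaps for an emotion classifier, implemented
# with forward/backward hooks. Model and layer choices are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # stand-in for the paper's CNNs
model.eval()
target_layer = model.layer4[-1]  # last convolutional block

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()          # feature maps on forward pass

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()    # gradients on backward pass

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=None):
    """Return a [0, 1] heatmap of pixels driving the predicted emotion."""
    logits = model(image)                        # image: (1, 3, H, W)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    acts, grads = activations["value"], gradients["value"]
    weights = grads.mean(dim=(2, 3), keepdim=True)      # channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:],
                        mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().cpu()
```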


Subject(s)
COVID-19 , Deep Learning , Adult , Child , Animals , Humans , Artificial Intelligence , Pandemics , Emotions
3.
3rd International Conference for Emerging Technology, INCET 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2018891

ABSTRACT

In the COVID-19 age, we are becoming increasingly reliant on virtual interactions such as Zoom and Google Meet / Teams meetings. The video received from live webcams during these interactions is a rich source for researchers seeking to understand human emotions. Due to its numerous applications in human-computer interaction (HCI), the analysis of emotion from facial expressions has piqued the interest of the research community. The primary objective of this study is to assess various emotions using distinctive facial expressions captured via a live web camera. Traditional approaches (conventional FER) rely on manual feature extraction before classifying the emotional state, whereas Deep Learning, Convolutional Neural Networks, and Transfer Learning are now widely used for emotion classification due to their advanced feature extraction from images. In this implementation, the advanced deep learning models MTCNN and VGG-16 are used to extract features and classify seven distinct emotions based on facial landmarks in live video. Using the standard FER2013 dataset, a maximum accuracy of 97.23 percent for training and 60.2 percent for validation was achieved for emotion classification. © 2022 IEEE.
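A minimal sketch of the described two-stage pipeline (MTCNN face detection, then VGG-16 emotion classification) follows. The `facenet-pytorch` package for MTCNN, the ImageNet-pretrained VGG-16 with a replaced head, and the FER2013 class ordering are assumptions for illustration; the paper's framework and trained weights are not reproduced.

```python
# Minimal sketch: MTCNN face detection feeding a 7-class VGG-16 classifier.
# Assumes the facenet-pytorch package; class order and setup are illustrative.
import torch
import torch.nn as nn
from facenet_pytorch import MTCNN
from torchvision import models, transforms
from PIL import Image

EMOTIONS = ["angry", "disgust", "fear", "happy",
            "sad", "surprise", "neutral"]        # assumed FER2013 order

mtcnn = MTCNN(image_size=224, margin=20)         # detects and crops one face

vgg = models.vgg16(weights="IMAGENET1K_V1")
vgg.classifier[6] = nn.Linear(4096, len(EMOTIONS))  # replace 1000-class head;
vgg.eval()                                          # fine-tuned weights would be loaded here

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

def classify_frame(frame: Image.Image) -> str:
    """Detect a face in one video frame and return the predicted emotion."""
    face = mtcnn(frame)                  # (3, 224, 224) tensor in ~[-1, 1], or None
    if face is None:
        return "no face detected"
    face = normalize((face + 1) / 2)     # rescale to ~[0, 1], apply ImageNet stats
    with torch.no_grad():
        logits = vgg(face.unsqueeze(0))
    return EMOTIONS[logits.argmax(dim=1).item()]
```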

4.
IEEE Access ; 9:165806-165840, 2021.
Article in English | Web of Science | ID: covidwho-1621792

ABSTRACT

Facial expressions are mirrors of human thoughts and feelings. They provide a wealth of social cues to the viewer, including the focus of attention, intention, motivation, and emotion, and are regarded as a potent form of silent communication. Analysis of these expressions gives a significantly more profound insight into human behavior. AI-based Facial Expression Recognition (FER) has become one of the crucial research topics in recent years, with applications in dynamic analysis, pattern recognition, interpersonal interaction, mental health monitoring, and many more. However, with the global push toward online platforms due to the COVID-19 pandemic, there has been a pressing need to innovate and offer a new FER analysis framework for the increasing visual data generated by videos and photographs. Furthermore, the emotion-wise facial expressions of kids, adults, and senior citizens vary, which must also be considered in FER research. A great deal of research has been done in this area, but it lacks a comprehensive overview of the literature that showcases past work and provides aligned future directions. In this paper, the authors provide a comprehensive evaluation of AI-based FER methodologies, including datasets, feature extraction techniques, algorithms, and recent breakthroughs with their applications in facial expression identification. To the best of the authors' knowledge, this is the only review paper covering all aspects of FER across various age brackets, and it should significantly benefit the research community in the coming years.
